Record: 0.4311 BPB - Complementary Training + Backoff N-gram Mixer + TTT #1033
Naazimsnh02 wants to merge 2 commits into openai:main from
Conversation
This submission violates Condition 2 as defined in #1017. The n-gram scoring method receives the realized target token as an argument and evaluates the n-gram probability exclusively at that token: neither the hash-based higher-order lookup nor the unigram fallback ever constructs a distribution over the full vocabulary. The mixture of neural and n-gram components therefore operates on scalars rather than on committed probability vectors. No full distribution is defined before the realized token is observed, which means there is no codebook from which a message could actually be decompressed, and the quantity being reported is not a prequential code length. The entropy-adaptive mixing weight compounds the problem: it is a scalar functional of the neural distribution used to modulate the contribution of a component that was itself never normalized over the vocabulary, which is the exact pattern listed in Section VI of #1017. See #995 for more background. The remaining machinery (causal cache updates, score-first TTT) appears sound, but Condition 2 is load-bearing and it is not satisfied here.
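To make the distinction concrete, here is a minimal sketch (names and smoothing scheme are illustrative, not the submission's code) of what Condition 2 requires: the n-gram component must commit a full, normalized distribution over the vocabulary before looking at the realized token, so that the reported bits are a genuine prequential code length.

```python
import math

def ngram_probs_full(counts, context, vocab_size, alpha=1.0):
    """Commit a full distribution P(. | context) over the whole vocabulary
    BEFORE the target token is observed (add-alpha smoothing here is just
    one simple way to normalize; the submission's backoff scheme differs)."""
    ctx_counts = counts.get(context, {})
    denom = sum(ctx_counts.values()) + alpha * vocab_size
    return [(ctx_counts.get(t, 0) + alpha) / denom for t in range(vocab_size)]

def code_length_bits(probs, target):
    # Prequential code length: -log2 of the probability that the
    # already-committed distribution assigned to the realized token.
    return -math.log2(probs[target])

# The non-compliant pattern evaluates P(target | context) as a scalar only;
# the compliant pattern normalizes over all vocab_size tokens first.
counts = {("a",): {0: 3, 1: 1}}
probs = ngram_probs_full(counts, ("a",), vocab_size=4)
bits = code_length_bits(probs, 0)
```

Only once a full `probs` vector exists can it be mixed with the neural distribution and renormalized; mixing two scalars evaluated at the target token defines no decodable code.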
Community Review — Record: 0.4311 BPB - Complementary Training + Backoff N-gram Mixer + TTT
BPB: 0.4311 | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)
What I found in the code: the TTT path at line 1153 implements the score-first-per-chunk pattern, i.e. each chunk is scored under the current adapter state before the adapter updates on it. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that is what the code does here.
CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.04s, dim=512, layers=11, vocab=1024, code=94358 B, SMOKE_TEST_PASS.
Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.
Auto-classification caveat: this review was drafted by the AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.
Reviewed by @MatoTeziTanka — The Agora.
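The score-first-per-chunk TTT pattern the review describes can be sketched as follows; `score_fn`, `update_fn`, and `state` are illustrative stand-ins, not the submission's actual functions.

```python
def evaluate_with_ttt(chunks, score_fn, update_fn, state):
    """Score-first-per-chunk test-time training: each chunk is scored under
    the adapter state as it was BEFORE that chunk, and only then does the
    adapter update on the same chunk. No token's code length ever depends
    on parameters that were trained on that token."""
    total_bits = 0.0
    for chunk in chunks:
        total_bits += score_fn(state, chunk)   # score under pre-update state
        state = update_fn(state, chunk)        # then adapt on the same chunk
    return total_bits, state

# Toy run with stand-in score/update functions to show the ordering.
bits, final_state = evaluate_with_ttt(
    chunks=[[1, 2], [3]],
    score_fn=lambda s, c: len(c) * (s + 1),
    update_fn=lambda s, c: s + len(c),
    state=0,
)
```

The illegal variant would call `update_fn` before `score_fn` inside the loop (or loop over the data for multiple epochs), letting each chunk be scored by parameters that have already seen it.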
Summary
val_bpb: 0.4311 (3-seed mean, std < 0.0001) | ~15.9 MB | 8xH100 SXM | 600s train + ~562s eval
Key Innovation: Complementary Training
Standard approach: train model on uniform cross-entropy, bolt on n-gram cache at eval.
Our approach: during training, downweight tokens that a bigram predictor would get right (COMPLEMENT_ALPHA=0.5). The model learns to focus its 27M parameters on tokens that statistical caches can't predict — novel word choices, long-range dependencies, semantic surprises. This creates a natural division of labor between the neural model and the n-gram cache at eval time.
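A minimal sketch of the loss weighting described above; the helper names and the argmax-based "bigram would get it right" test are assumptions about the implementation, with only `COMPLEMENT_ALPHA=0.5` taken from the summary.

```python
COMPLEMENT_ALPHA = 0.5  # downweight applied where the bigram is already right

def complementary_weights(targets, prev_tokens, bigram_argmax):
    """Per-token loss weights: tokens the bigram predictor would get right
    are downweighted, so the neural model's capacity concentrates on
    statistically hard tokens. `bigram_argmax` maps prev_token -> most
    likely next token (illustrative stand-in for the bigram cache)."""
    return [COMPLEMENT_ALPHA if bigram_argmax.get(p) == t else 1.0
            for p, t in zip(prev_tokens, targets)]

def weighted_ce(nll_per_token, weights):
    # Weighted cross-entropy: per-token negative log-likelihoods from the
    # neural model, averaged under the complementary weights.
    return sum(w * l for w, l in zip(weights, nll_per_token)) / sum(weights)

# Token at position 0 is bigram-predictable (1 -> 2), position 1 is not.
w = complementary_weights(targets=[2, 3], prev_tokens=[1, 1],
                          bigram_argmax={1: 2})
loss = weighted_ce([1.0, 2.0], w)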
3-Seed Results
Training stopped at 600s (~6976 steps). Full eval (diag + q_rt + q_sw + TTT + ngram) completes in ~562s ≈ 9.37 min.
Architecture
11L 512d GQA 8/4, MLP 3.0x, XSA-4, LeakyReLU(0.5)², BigramHash(2048), Int6 + LZMA.
VRL (Value Residual Learning), SmearGate, Partial RoPE (16 dims), U-Net skip connections, EMA + SWA.
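For the `BigramHash(2048)` component named in the architecture line, a plausible shape is a fixed-size hashed bigram cache that commits a full smoothed distribution before each target is seen and updates causally afterward; the table layout and smoothing below are assumptions, with only the bin count and vocab size taken from this PR.

```python
class BigramHashCache:
    """Sketch of a hashed bigram cache (table size per BigramHash(2048);
    update/lookup details here are assumptions). Counts (prev, next) pairs
    in a fixed number of hash bins, updated causally so a token is only
    counted after it has been scored."""

    def __init__(self, n_bins=2048, vocab_size=1024):
        self.n_bins = n_bins
        self.vocab_size = vocab_size
        self.counts = [dict() for _ in range(n_bins)]

    def _bin(self, prev_token):
        return hash(("bigram", prev_token)) % self.n_bins

    def probs(self, prev_token, alpha=1.0):
        # Full add-alpha-smoothed distribution over the vocabulary,
        # committed before the next token is observed.
        c = self.counts[self._bin(prev_token)]
        denom = sum(c.values()) + alpha * self.vocab_size
        return [(c.get(t, 0) + alpha) / denom for t in range(self.vocab_size)]

    def update(self, prev_token, next_token):
        # Called only AFTER next_token has been scored (causal update).
        b = self.counts[self._bin(prev_token)]
        b[next_token] = b.get(next_token, 0) + 1
```

Hash collisions merge the statistics of colliding contexts, trading a little accuracy for a bounded memory footprint.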
Eval Stack
N-gram mixing weight: 0.20 + 0.55 · sigmoid(2 · (H − 3.0)) — the n-gram component gets 20–75% of the mixture depending on model uncertainty (neural entropy H).
Credits
This builds on community work:
Our contribution: validated Complementary Training as a first-class technique that meaningfully improves n-gram mixer performance by specializing the neural model on statistically hard tokens.